AI Learns to Speak Like a Baby

TIME - Tech

Imagine seeing the world through the eyes of a six-month-old child. You don't have the words to describe anything. How could you possibly begin to understand language, when each sound that comes out of the mouths of those around you has an almost infinite number of potential meanings? This question has led many scientists to hypothesize that humans must have some intrinsic language facility to help us get started in acquiring language. But a paper published in Science this week found that a relatively simple AI system fed with data filmed from a baby's-eye view began to learn words.


Indigenous groups fear culture distortion as AI learns their languages

The Japan Times

When U.S. tech firm OpenAI rolled out Whisper, a speech recognition tool offering audio transcription and translation into English for dozens of languages including Maori, it rang alarm bells for many Indigenous New Zealanders. Whisper, launched in September by the company behind the ChatGPT chatbot, was trained on 680,000 hours of audio from the web, including 1,381 hours of the Maori language. Indigenous tech and culture experts say that while such technologies can help preserve and revive their languages, harvesting their data without consent risks abuse, distortion of Indigenous culture, and deprivation of minority rights.


Your selfies are helping AI learn. You did not consent to this.

Washington Post - Technology News

Maybe that sounds like a utopian fantasy. You have gotten used to the feeling that once you put digital bits of yourself or your loved ones online, you lose control of what happens next. Dryhurst told me that with publicly available AI such as DALL-E and ChatGPT getting a lot of attention but still imperfect, this is an ideal time to re-establish what real personal consent should mean for the AI age. And he said that some influential AI organizations are open to this, too.


AI learns how to recognise the species of splatted mosquitoes

New Scientist

Artificial intelligence trained to recognise both living and dead mosquitoes could help track the three species most responsible for transmitting mosquito-borne diseases. Mosquitoes kill more people than any other animal because they can transmit diseases such as dengue, malaria and Zika virus fever. Using AI to automatically identify different mosquito species could make it easier to track their presence worldwide – but such an AI needs many images of mosquitoes to learn from. Song-Quan Ong at the Institute for Tropical Biology and Conservation in Malaysia and his colleague recruited three volunteers to help them image yellow fever mosquitoes (Aedes aegypti), Asian tiger mosquitoes (Aedes albopictus) and southern house mosquitoes (Culex quinquefasciatus). The researchers took two photos of each mosquito that landed on the volunteers' hands: one right after it landed and another after it was splatted.
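The study's species recognition is a deep image classifier trained on those paired photos. As a deliberately trivial stand-in for that model, the sketch below classifies made-up 2-D feature vectors (say, body length and wing ratio) with a nearest-centroid rule; the species names come from the article, but every number is invented for illustration.

```python
# Toy nearest-centroid classifier for the three species named in the article.
# Feature vectors are hypothetical (body length, wing ratio) pairs, NOT real
# morphological data; the actual study trains a deep model on photographs.

SPECIES_SAMPLES = {
    "Aedes aegypti":          [(4.0, 1.1), (4.2, 1.0), (3.9, 1.2)],
    "Aedes albopictus":       [(5.0, 1.4), (5.2, 1.5), (4.8, 1.3)],
    "Culex quinquefasciatus": [(6.5, 1.9), (6.8, 2.0), (6.3, 1.8)],
}

def centroid(points):
    """Average the labeled examples of one species into a single prototype."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

CENTROIDS = {name: centroid(pts) for name, pts in SPECIES_SAMPLES.items()}

def classify(features):
    """Assign a feature vector to the species with the nearest centroid."""
    return min(
        CENTROIDS,
        key=lambda s: (features[0] - CENTROIDS[s][0]) ** 2
                      + (features[1] - CENTROIDS[s][1]) ** 2,
    )
```

The "needs many images to learn from" point shows up even here: with only three samples per species, the centroids are noisy, and a real pixel-level model needs vastly more data per class.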


AI Learns What an Infant Knows about the Physical World

#artificialintelligence

If I drop a pen, you know that it won't hover in midair but will fall to the floor. Similarly, if the pen encounters a desk on its way down, you know it won't travel through the surface but will instead land on top. These fundamental properties of physical objects seem intuitive to us. Infants as young as three months know that a ball no longer in sight still exists and that the ball can't teleport from behind the couch to the top of the refrigerator. Despite mastering complex games, such as chess and poker, artificial intelligence systems have yet to demonstrate the "commonsense" knowledge that an infant is either born with or picks up seemingly without effort in their first few months.


AI learns how to play Minecraft by watching videos - AI News

#artificialintelligence

OpenAI has trained a neural network to play Minecraft by Video PreTraining (VPT) on a massive unlabeled video dataset of human Minecraft play, while using just a small amount of labeled contractor data. With a bit of fine-tuning, the AI research and deployment company is confident that its model can learn to craft diamond tools, a task that usually takes proficient humans over 20 minutes (24,000 actions). The model uses the native human interface of keypresses and mouse movements, making it quite general, and represents a step towards general computer-using agents.

A spokesperson for the Microsoft-backed firm said: "The internet contains an enormous amount of publicly available videos that we can learn from. You can watch a person make a gorgeous presentation, a digital artist draw a beautiful sunset, and a Minecraft player build an intricate house. However, these videos only provide a record of what happened, not precisely how it was achieved, i.e. you will not know the exact sequence of mouse movements and keys pressed.

"If we would like to build large-scale foundation models in these domains as we've done in language with GPT, this lack of action labels poses a new challenge not present in the language domain, where 'action labels' are simply the next words in a sentence."

To utilise the wealth of unlabeled video data available on the internet, OpenAI introduces a novel yet simple semi-supervised imitation learning method: Video PreTraining (VPT). The team begins by gathering a small dataset from contractors, recording not only their video but also the actions they took, which in this case are keypresses and mouse movements. With this data the company can train an inverse dynamics model (IDM), which predicts the action being taken at each step in the video. Importantly, the IDM can use both past and future information to guess the action at each step.

The spokesperson added: "This task is much easier and thus requires far less data than the behavioral cloning task of predicting actions given past video frames only, which requires inferring what the person wants to do and how to accomplish it."
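The three-stage recipe described above can be sketched in miniature. In this toy version, "frames" are just integers and an "action" is whatever the recorded data says moved one frame to the next; OpenAI's real pipeline uses large neural networks over pixels, keypresses, and mouse movements, so everything below is a structural illustration only.

```python
# Toy sketch of the VPT recipe: (1) train an inverse dynamics model (IDM) on
# a small labeled dataset, (2) pseudo-label a large unlabeled video corpus
# with the IDM, (3) behavior-clone a causal policy on the pseudo-labels.
from collections import Counter, defaultdict

def train_idm(labeled_clips):
    """Learn action = f(frame_t, frame_t+1) from the small contractor set.

    With toy integer frames this is just a lookup table over frame pairs;
    crucially, it conditions on the *future* frame as well as the past one,
    which is what makes the IDM's job easier than behavioral cloning.
    """
    table = {}
    for frames, actions in labeled_clips:
        for t, a in enumerate(actions):
            table[(frames[t], frames[t + 1])] = a
    return table

def pseudo_label(idm, frames):
    """Run the IDM over an unlabeled video to recover its action sequence."""
    return [idm[(frames[t], frames[t + 1])] for t in range(len(frames) - 1)]

def behavior_clone(videos_with_labels):
    """Fit a causal policy: predict the action from the current frame only."""
    votes = defaultdict(Counter)
    for frames, actions in videos_with_labels:
        for t, a in enumerate(actions):
            votes[frames[t]][a] += 1
    return {f: c.most_common(1)[0][0] for f, c in votes.items()}

# 1) Small labeled "contractor" dataset: frames plus recorded actions.
labeled = [((0, 1, 3, 4), (1, 2, 1))]
idm = train_idm(labeled)

# 2) A larger "web video" with no action labels, pseudo-labeled by the IDM.
web_video = (1, 3, 4)
pseudo = pseudo_label(idm, web_video)

# 3) Behavioral cloning on the pseudo-labeled footage yields the policy.
policy = behavior_clone([(web_video, pseudo)])
```

The asymmetry the spokesperson describes is visible in the function signatures: `train_idm` gets to look one frame into the future, while `behavior_clone` must commit to an action from the current frame alone.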


Can AI Learn to Forget?

Communications of the ACM

Machine learning has emerged as a valuable tool for spotting patterns and trends that might otherwise escape humans. The technology, which can build elaborate models based on everything from personal preferences to facial recognition, is used widely to understand behavior, spot patterns and trends, and make informed predictions. Yet for all the gains, there is also plenty of pain. A major problem associated with machine learning is that once an algorithm or model exists, expunging individual records or chunks of data is extraordinarily difficult. In most cases, it is necessary to retrain the entire model--sometimes with no assurance that that model will not continue to incorporate the suspect data in some way, says Gautam Kamath, an assistant professor in the David R. Cheriton School of Computer Science at the University of Waterloo in Canada.
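The retraining problem Kamath describes can be seen even in the simplest possible "model". In this deliberately trivial sketch the model is just the mean of the training targets; the point is that deleting a raw record after training leaves its influence baked into the fitted parameters, so truly forgetting it requires refitting from scratch.

```python
# Toy illustration of why machine unlearning is hard: a trained model keeps
# the influence of every record it saw, even after the raw record is deleted.

def train(records):
    """Fit a trivial model: the mean of the data."""
    return sum(records) / len(records)

data = [2.0, 4.0, 6.0, 100.0]  # 100.0 is the record someone asks us to delete
model = train(data)            # the outlier has already shaped the model

# Deleting the raw record does nothing to the already-fitted model:
data.remove(100.0)
assert model == 28.0           # its influence persists in the parameters

# Exact "unlearning" here means retraining from scratch on what remains:
retrained = train(data)
```

With a one-parameter model, retraining is instant; with a model trained for weeks on millions of records, it is the "extraordinary difficulty" the article describes, and even then there may be no guarantee the suspect data's influence is fully gone from derived artifacts.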



Two Minute Papers: This AI Learns From Its Dreams

#artificialintelligence

The paper "World Models" is available here: https://arxiv.org/abs/1803.10122 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.